"Machine Learning in Python"
Six reasons why Ben Lorica (@bigdata) recommends scikit-learn
One: Commitment to documentation and usability
One of the reasons I started using scikit-learn was its excellent documentation (which I hold up as an example for other communities and projects to emulate).
Two: Models are chosen and implemented by a dedicated team of experts
Scikit-learn’s stable of contributors includes experts in both machine learning and software development.
Three: Covers most machine-learning tasks
Scan the list of things available in scikit-learn and you quickly realize that it includes tools for many of the standard machine-learning tasks (such as clustering, classification, and regression).
Four: Python and Pydata
An impressive set of Python data tools (pydata) has emerged over the last few years.
Five: Focus
Scikit-learn is a machine-learning library. Its goal is to provide a set of common algorithms to Python users through a consistent interface.
Six: scikit-learn scales to most data problems
Many problems can be tackled with a single (big-memory) server, and well-designed software running on a single machine can outperform distributed systems.
...an introduction to Python
...an introduction to machine learning
In [4]:
from sklearn import datasets
from numpy import logical_or
from sklearn.lda import LDA
from sklearn.metrics import confusion_matrix
In [5]:
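# Keep only the first two iris classes (setosa and versicolor) to get a binary problem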
iris = datasets.load_iris()
subset = logical_or(iris.target == 0, iris.target == 1)
X = iris.data[subset]
y = iris.target[subset]
In [6]:
print X[0:5,:]
In [7]:
print y[0:5]
In [8]:
# Linear Discriminant Analysis
lda = LDA(2)
lda.fit(X, y)
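# Compare the true labels with in-sample predictions (rows = true classes, columns = predicted classes)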
confusion_matrix(y, lda.predict(X))
Out[8]:
The main "interfaces" in scikit-learn are listed below (a single class can implement several of them; a short sketch combining all four follows the list):
Estimator:
estimator = obj.fit(data, targets)
Predictor:
prediction = obj.predict(data)
Transformer:
new_data = obj.transform(data)
Model:
score = obj.score(data)
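For example, a single object can expose all four interfaces at once. A minimal sketch using KMeans on the iris data (the variable names here are purely illustrative):

from sklearn import datasets
from sklearn.cluster import KMeans

X_iris = datasets.load_iris().data

kmeans = KMeans(n_clusters=2)
kmeans.fit(X_iris)                    # Estimator: learn the cluster centers
labels = kmeans.predict(X_iris)       # Predictor: assign each sample to a cluster
distances = kmeans.transform(X_iris)  # Transformer: distances to each cluster center
score = kmeans.score(X_iris)          # Model: opposite of the k-means objective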
All estimators implement the fit method:
estimator.fit(X, y)
An estimator is an object that fits a model based on some training data and is capable of inferring some properties of new data.
In [9]:
from sklearn.linear_model import LogisticRegression
In [10]:
# Create Model
model = LogisticRegression()
# Fit Model
model.fit(X, y)
Out[10]:
In [11]:
from sklearn.cluster import KMeans
In [12]:
# Create Model
kmeans = KMeans(n_clusters = 2)
# Fit Model
kmeans.fit(X)
Out[12]:
In [13]:
from sklearn.decomposition import PCA
In [14]:
# Create Model
pca = PCA(n_components=2)
# Fit Model
pca.fit(X)
Out[14]:
The fit method accepts a $y$ argument even when it isn't needed (in that case $y$ is simply ignored). This becomes important later, when unsupervised steps such as PCA are chained with supervised ones in a pipeline.
In [15]:
from sklearn.decomposition import PCA
In [16]:
pca = PCA(n_components=2)
pca.fit(X, y)
Out[16]:
In [17]:
from sklearn.feature_selection import SelectKBest
from sklearn.metrics import matthews_corrcoef
In [18]:
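# SelectKBest keeps the k features with the highest univariate scores (ANOVA F-test by default)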
# Create Model
kbest = SelectKBest(k = 3)
# Fit Model
kbest.fit(X, y)
Out[18]:
In [83]:
model = LogisticRegression()
model.fit(X, y)
kbest = SelectKBest(k = 1)
kbest.fit(X, y)
kmeans = KMeans(n_clusters = 2)
kmeans.fit(X, y)
pca = PCA(n_components=2)
pca.fit(X, y)
Out[83]:
What can we do with an estimator?
Inference!
In [19]:
model = LogisticRegression()
model.fit(X, y)
print model.coef_
In [20]:
kmeans = KMeans(n_clusters = 2)
kmeans.fit(X)
print kmeans.cluster_centers_
In [21]:
pca = PCA(n_components=2)
pca.fit(X, y)
print pca.explained_variance_
In [22]:
kbest = SelectKBest(k = 1)
kbest.fit(X, y)
print kbest.get_support()
Is that it?
In [23]:
model = LogisticRegression()
model.fit(X, y)
X_test = [[ 5.006, 3.418, 1.464, 0.244], [ 5.936, 2.77 , 4.26 , 1.326]]
model.predict(X_test)
Out[23]:
In [24]:
print model.predict_proba(X_test)
In [25]:
pca = PCA(n_components=2)
pca.fit(X)
print pca.transform(X)[0:5,:]
fit_transform is also available (and is sometimes faster than calling fit and transform separately).
In [54]:
pca = PCA(n_components=2)
print pca.fit_transform(X)[0:5,:]
In [26]:
kbest = SelectKBest(k = 1)
kbest.fit(X, y)
print kbest.transform(X)[0:5,:]
In [27]:
from sklearn.cross_validation import KFold
from numpy import arange
from random import shuffle
from sklearn.dummy import DummyClassifier
In [86]:
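# DummyClassifier is a baseline that ignores the features; for classifiers, score() returns mean accuracy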
model = DummyClassifier()
model.fit(X, y)
model.score(X, y)
Out[86]:
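The KFold import above suggests checking that baseline score with cross-validation rather than on the training data; a minimal sketch, assuming the old-style sklearn.cross_validation API used throughout this notebook (newer releases moved it to sklearn.model_selection):

from sklearn.cross_validation import KFold

kf = KFold(len(y), n_folds = 5, shuffle = True)
scores = []
for train_idx, test_idx in kf:
    baseline = DummyClassifier()
    baseline.fit(X[train_idx], y[train_idx])
    scores.append(baseline.score(X[test_idx], y[test_idx]))
print sum(scores) / len(scores)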
In [87]:
from sklearn.pipeline import Pipeline
In [55]:
pipe = Pipeline([
("select", SelectKBest(k = 3)),
("pca", PCA(n_components = 1)),
("classify", LogisticRegression())
])
pipe.fit(X, y)
pipe.predict(X)
Out[55]:
Intermediate steps of a pipeline must be both Estimators and Transformers.
The final step needs only to be an Estimator.
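To make that contract concrete, here is a minimal, purely illustrative transformer (IdentityTransformer is a hypothetical name, not part of scikit-learn) that could serve as an intermediate step:

from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression

class IdentityTransformer(BaseEstimator, TransformerMixin):
    # A no-op step: learns nothing in fit, passes the data through in transform
    def fit(self, X, y = None):
        return self        # fit must return self
    def transform(self, X):
        return X           # intermediate steps must also implement transform

pipe = Pipeline([
    ("identity", IdentityTransformer()),  # Estimator + Transformer
    ("classify", LogisticRegression())    # final step: an Estimator is enough
])
pipe.fit(X, y)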
In [78]:
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.linear_model import SGDClassifier
In [71]:
news = fetch_20newsgroups()
data = news.data
category = news.target
In [72]:
len(data)
Out[72]:
In [92]:
print " ".join(news.target_names)
In [99]:
print data[8]
In [100]:
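# vect: tokenize each document and count word occurrences (capped at the 100 most frequent terms)
# tfidf: re-weight the counts by term frequency-inverse document frequency
# clf: a linear classifier trained with stochastic gradient descent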
pipe = Pipeline([
('vect', CountVectorizer(max_features = 100)),
('tfidf', TfidfTransformer()),
('clf', SGDClassifier()),
])
pipe.fit(data, category)
Out[100]:
In [107]:
import pandas as pd
import numpy as np
import sklearn.preprocessing, sklearn.decomposition, sklearn.linear_model, sklearn.pipeline, sklearn.metrics
from sklearn_pandas import DataFrameMapper, cross_val_score
In [117]:
data = pd.DataFrame({
'pet': ['cat', 'dog', 'dog', 'fish', 'cat', 'dog', 'cat', 'fish'],
'children': [4., 6, 3, 3, 2, 3, 5, 4],
'salary': [90, 24, 44, 27, 32, 59, 36, 27]
})
In [111]:
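# pet: one-hot encode the categorical column with LabelBinarizer
# children: standardize to zero mean and unit variance
# salary: None passes the column through unchanged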
mapper = DataFrameMapper([
('pet', sklearn.preprocessing.LabelBinarizer()),
('children', sklearn.preprocessing.StandardScaler()),
('salary', None)
])
In [113]:
mapper.fit_transform(data)
Out[113]:
In [157]:
mapper = DataFrameMapper([
('pet', sklearn.preprocessing.LabelBinarizer()),
('children', sklearn.preprocessing.StandardScaler()),
('salary', None)
])
pipe = Pipeline([
("mapper", mapper),
("pca", PCA(n_components=2))
])
pipe.fit_transform(data) # 'data' is a data frame, not a numpy array!
Out[157]:
Pandas pipelines require the sklearn-pandas module by @paulgb.
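The cross_val_score imported from sklearn_pandas above exists so that a DataFrame can be fed straight into cross-validation; a hypothetical sketch (reusing data and the imports from the cells above, with salary as a purely illustrative target, and assuming the wrapper mirrors scikit-learn's cross_val_score(estimator, X, y, ...) signature):

mapper_cv = DataFrameMapper([
    ('pet', sklearn.preprocessing.LabelBinarizer()),
    ('children', sklearn.preprocessing.StandardScaler())
])
pipe_cv = sklearn.pipeline.Pipeline([
    ("mapper", mapper_cv),
    ("regress", sklearn.linear_model.LinearRegression())
])
print cross_val_score(pipe_cv, data, data.salary, scoring = 'r2')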
In [212]:
from sklearn.grid_search import GridSearchCV, RandomizedSearchCV
from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
In [137]:
# Create sample dataset
X, y = datasets.make_classification(n_samples = 1000, n_features = 40, n_informative = 6, n_classes = 2)
In [162]:
# Pipeline for Feature Selection to Random Forest
pipe = Pipeline([
("select", SelectKBest()),
("classify", RandomForestClassifier())
])
In [175]:
# Define parameter grid
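# Keys follow the <step name>__<parameter> convention to reach parameters inside the pipeline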
param_grid = {
"select__k" : [1, 6, 20, 40],
"classify__n_estimators" : [1, 10, 100],
}
gs = GridSearchCV(pipe, param_grid)
In [183]:
# Search over grid
gs.fit(X, y)
gs.best_params_
Out[183]:
In [192]:
print gs.best_estimator_.predict(X.mean(axis = 0))
The search space grows exponentially with the number of parameters.
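For a rough sense of scale, ParameterGrid from the same grid_search module can count the candidate settings; a small sketch using the two grids from this notebook (newer releases moved ParameterGrid to sklearn.model_selection):

from sklearn.grid_search import ParameterGrid

small_grid = {"select__k": [1, 6, 20, 40], "classify__n_estimators": [1, 10, 100]}
big_grid = {"select__k": range(1, 40), "classify__n_estimators": range(1, 100)}

print len(ParameterGrid(small_grid))  # 4 * 3 = 12 settings
print len(ParameterGrid(big_grid))    # 39 * 99 = 3861 settings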
In [185]:
gs.grid_scores_
Out[185]:
GridSearch on 1 core:
In [207]:
param_grid = {
"select__k" : [1, 5, 10, 15, 20, 25, 30, 35, 40],
"classify__n_estimators" : [1, 5, 10, 25, 50, 75, 100],
}
gs = GridSearchCV(pipe, param_grid, n_jobs = 1)
%timeit gs.fit(X, y)
print
GridSearch on 7 cores:
In [208]:
gs = GridSearchCV(pipe, param_grid, n_jobs = 7)
%timeit gs.fit(X, y)
print
With a fine-grained grid, GridSearchCV can become very slow:
In [220]:
param_grid = {
"select__k" : range(1, 40),
"classify__n_estimators" : range(1, 100),
}
In [221]:
gs = GridSearchCV(pipe, param_grid, n_jobs = 7)
gs.fit(X, y)
print "Best CV score", gs.best_score_
print gs.best_params_
We can instead randomly sample from the parameter space with RandomizedSearchCV:
In [229]:
gs = RandomizedSearchCV(pipe, param_grid, n_jobs = 7, n_iter = 10)
gs.fit(X, y)
print "Best CV score", gs.best_score_
print gs.best_params_